The Swedish Parallel Sounding Method State of the Art
In the 1960s the Swedish Hydrographic Department developed a new method for hydrographic surveys. Instead of a single ship, a formation of up to nine vessels navigating along parallel lines was used for the echo-sounding work. The formation consists of a mother vessel, positioned from shore stations, and satellite boats, positioned from the mother vessel. Over the years the method has been refined in various respects and is today a powerful instrument, remarkably more efficient than in earlier days. The gains in efficiency derive from improved equipment, such as vessels, positioning systems and echo sounders, from computers and computer programs, and from new methods for calibration, surveying, evaluation of collected data, etc.
Sea Surveying a Probability Model
The ambition to optimize all hydrographic work has become very important today. This is the logical outcome of the increased demand for closely spaced lines of soundings and the high cost of all operational work. This probability model is presented in order to make planning easier and the analysis of surveying operations more accurate. To the best of my knowledge, no such model has previously been constructed. By introducing some of the variable parameters of hydrographic surveying into the model, a probabilistic analysis is achieved.
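The abstract above can be made concrete with a toy coverage model. The sketch below is purely illustrative (the function name, parameters, and the uniform-target and independent-passes assumptions are my own, not from the paper): it estimates the probability that a point target is ensonified at least once, given the line spacing, the effective swath width per line, and the number of passes.

```python
# Illustrative sketch, not the paper's actual model: a point target is
# assumed uniformly placed across track, and passes are assumed independent.
def detection_probability(line_spacing_m, swath_width_m, passes=1):
    """P(target ensonified at least once) for parallel sounding lines."""
    if line_spacing_m <= 0:
        raise ValueError("line spacing must be positive")
    # Fraction of the across-track interval covered by one pass.
    p_single = min(swath_width_m / line_spacing_m, 1.0)
    return 1.0 - (1.0 - p_single) ** passes

# e.g. 100 m line spacing, 40 m effective swath, two passes
print(round(detection_probability(100, 40, passes=2), 2))  # 0.64
```

Overlapping swaths (spacing at or below swath width) give certainty in a single pass, which is the regime closely spaced sounding lines aim for.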
Long-Term Localization for Self-Driving Cars
Long-term localization is hard due to changing conditions, while relative localization within time sequences is much easier. To achieve long-term localization in a sequential setting, such as for self-driving cars, relative localization should be used to the fullest extent whenever possible. This thesis presents solutions and insights, both for long-term sequential visual localization and for localization using global navigation satellite systems (GNSS), that push us closer to the goal of accurate and reliable localization for self-driving cars. It addresses the question: how can accurate and robust, yet cost-effective, long-term localization for self-driving cars be achieved? Starting from this question, the thesis explores how existing sensor suites for advanced driver-assistance systems (ADAS) can be used most efficiently, and how landmarks in maps can be recognized and used for localization even after severe changes in appearance. The findings show that:
* State-of-the-art ADAS sensors are insufficient to meet the requirements for localization of a self-driving car in less than ideal conditions; GNSS and visual localization are identified as areas to improve.
* Highly accurate relative localization with no convergence delay is possible using time-relative GNSS observations with a single-band receiver and no base stations.
* Sequential semantic localization is identified as a promising focus for further research, based on a benchmark study comparing state-of-the-art visual localization methods in challenging autonomous driving scenarios, including day-to-night and seasonal changes.
* A novel sequential semantic localization algorithm improves accuracy while significantly reducing map size compared to traditional methods based on matching of local image features.
* Improvements to semantic segmentation in challenging conditions can be made efficiently by automatically generating pixel correspondences between images from a multitude of conditions and enforcing a consistency constraint during training.
* A segmentation algorithm with automatically defined, more fine-grained classes improves localization performance.
* The performance advantage of modern local image features over traditional ones in single-image localization is all but erased when considering sequential data with odometry, encouraging future research to focus more on sequential localization rather than pure single-image localization.
Using a single band GNSS receiver to improve relative positioning in autonomous cars
We show how the combination of a single-band global navigation satellite system (GNSS) receiver, a standard automotive-grade inertial measurement unit (IMU), and wheel speed sensors can be used for relative positioning with accuracy on the decimeter scale. This is achieved without expensive dual-band receivers, base stations, or long initialization times. The approach is implemented and evaluated in a natural driving environment against a reference system and against two simple baseline systems: one using only the IMU and wheel speed sensors, the other also adding basic GNSS. The proposed solution exhibits substantially slower error growth than either of the two baseline systems.
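The kind of fusion described above can be sketched minimally: a dead-reckoning core driven by wheel speed and a yaw-rate gyro, corrected by time-relative GNSS displacement observations. The constant-gain correction below is a simplistic stand-in for the paper's filter, and all names are illustrative.

```python
import math

# Illustrative sketch only: a constant-gain correction stands in for the
# paper's actual estimator; function and parameter names are my own.
def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """One dead-reckoning step from wheel speed and yaw-rate gyro."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

def gnss_correct(x, y, gnss_dx, gnss_dy, prev_x, prev_y, gain=0.5):
    """Blend dead-reckoned position with a time-relative GNSS
    displacement (gnss_dx, gnss_dy) measured since (prev_x, prev_y)."""
    ex = (prev_x + gnss_dx) - x
    ey = (prev_y + gnss_dy) - y
    return x + gain * ex, y + gain * ey
```

The appeal of time-relative GNSS here is that displacement deltas avoid the slow absolute-position convergence a single-band receiver would otherwise require.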
Using Image Sequences for Long-Term Visual Localization
Estimating the pose of a camera in a known scene, i.e., visual localization, is a core task for applications such as self-driving cars. In many scenarios, image sequences are available, and existing work on combining single-image localization with odometry offers to unlock their potential for improving localization performance. Still, most of the literature focuses on single-image localization and ignores the availability of sequence data. The goal of this paper is to demonstrate the potential of image sequences in challenging scenarios, e.g., under day-night or seasonal changes. Combining ideas from the literature, we describe a sequence-based localization pipeline that couples odometry with both a coarse and a fine localization module. Experiments on long-term localization datasets show that combining single-image global localization against a prebuilt map with a visual odometry / SLAM pipeline improves performance to a level where the extended CMU Seasons dataset can be considered solved. We show that SIFT features can perform on par with modern state-of-the-art features in our framework, despite being much weaker and an order of magnitude faster to compute. Our code is publicly available at github.com/rulllars
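The core idea of combining odometry with sparse global localizations can be sketched in a toy 2D form, where a visual-odometry chain is re-anchored whenever a single-image global fix succeeds. This is a simplistic stand-in for the paper's pipeline; the names below are illustrative.

```python
# Toy sketch of sequence-based localization (not the paper's actual
# pipeline): relative odometry fills the gaps between global fixes.
def integrate_sequence(odometry_steps, global_fixes):
    """odometry_steps: list of (dx, dy) relative motions.
    global_fixes: dict mapping step index -> (x, y) absolute pose."""
    x, y = global_fixes.get(0, (0.0, 0.0))
    poses = [(x, y)]
    for i, (dx, dy) in enumerate(odometry_steps, start=1):
        x, y = x + dx, y + dy          # propagate with odometry
        if i in global_fixes:          # snap accumulated drift to the fix
            x, y = global_fixes[i]
        poses.append((x, y))
    return poses
```

For example, two unit steps with a global fix of (2.5, 0) at step 2 yield the poses [(0, 0), (1, 0), (2.5, 0)]: odometry carries the pose between fixes, and the fix removes accumulated drift.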
A Cross-Season Correspondence Dataset for Robust Semantic Segmentation
In this paper, we present a method to utilize 2D-2D point matches between images taken under different imaging conditions to train a convolutional neural network for semantic segmentation. Enforcing label consistency across the matches makes the final segmentation algorithm robust to seasonal changes. We describe how these 2D-2D matches can be generated with little human interaction by geometrically matching points from 3D models built from images. Two cross-season correspondence datasets are created, providing 2D-2D matches across seasonal changes as well as from day to night. The datasets are made publicly available to facilitate further research. We show that adding the correspondences as extra supervision during training improves the segmentation performance of the convolutional neural network, making it more robust to seasonal changes and weather conditions. In Proc. CVPR 2019.
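The label-consistency idea can be sketched as a loss over matched pixels. The snippet below penalizes disagreement between predicted class distributions at corresponding pixels from two conditions, using a symmetric cross-entropy; this is an illustrative stand-in, and the paper's exact loss formulation may differ.

```python
import numpy as np

# Hedged sketch of a cross-season consistency term (not the paper's
# exact loss): symmetric cross-entropy between matched pixels.
def consistency_loss(probs_a, probs_b, matches, eps=1e-9):
    """probs_*: (H, W, C) softmax outputs for two conditions.
    matches: list of ((ya, xa), (yb, xb)) 2D-2D correspondences."""
    total = 0.0
    for (ya, xa), (yb, xb) in matches:
        pa = probs_a[ya, xa]
        pb = probs_b[yb, xb]
        # Penalize each prediction under the other's distribution.
        total += -np.sum(pa * np.log(pb + eps)) - np.sum(pb * np.log(pa + eps))
    return total / max(len(matches), 1)
```

Matched pixels that receive the same class distribution in summer and winter contribute little to this term, so minimizing it pushes the network toward season-invariant predictions.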
Vehicle self-localization using off-the-shelf sensors and a detailed map
In the research on autonomous vehicles, self-localization is an important problem to solve. In this paper we present a localization algorithm based on a map and a set of off-the-shelf sensors, with the purpose of evaluating this low-cost solution with respect to localization performance. The test vehicle used is equipped with a Global Positioning System receiver, a gyroscope, wheel speed sensors, a camera providing information about lane markings, and a radar detecting landmarks along the road. Evaluation shows that the localization result is within, or close to, the requirements for autonomous driving when lane markings and good radar landmarks are present. However, it also indicates that the solution is not robust enough to handle situations in which one of these information sources is absent.
Fine-Grained Segmentation Networks: Self-Supervised Segmentation for Improved Long-Term Visual Localization
Long-term visual localization is the problem of estimating the camera pose of a given query image in a scene whose appearance changes over time. It is an important problem in practice, encountered, for example, in autonomous driving. In order to gain robustness to such changes, long-term localization approaches often use semantic segmentations as an invariant scene representation, as the semantic meaning of each scene part should not be affected by seasonal and other changes. However, these representations are typically not very discriminative due to the limited number of available classes. In this paper, we propose a new neural network, the Fine-Grained Segmentation Network (FGSN), that can be used to provide image segmentations with a larger number of labels and can be trained in a self-supervised fashion. In addition, we show how FGSNs can be trained to output consistent labels across seasonal changes. We demonstrate through extensive experiments that integrating the fine-grained segmentations produced by our FGSNs into existing localization algorithms leads to substantial improvements in localization performance
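The self-supervised labeling idea can be sketched with a plain clustering step: per-pixel feature vectors are clustered, and the cluster indices then serve as fine-grained training labels. The NumPy k-means stand-in below is purely illustrative; the actual FGSN relies on a learned feature extractor and different parameters.

```python
import numpy as np

# Illustrative stand-in for self-supervised label definition: cluster
# feature vectors and use cluster indices as fine-grained class labels.
def kmeans_labels(features, k, iters=10, seed=0):
    """features: (N, D) array of per-pixel feature vectors.
    Returns an (N,) array of cluster indices in [0, k)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers as cluster means.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels
```

Because the labels come from clustering rather than human annotation, the number of classes k can be made much larger than typical semantic label sets, which is what makes the resulting segmentations more discriminative.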
Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions
Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes as well as weather and seasonal variations, while providing highly accurate six degree-of-freedom (6DOF) camera pose estimates. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground-truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on 6DOF camera pose estimation accuracy through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions, showing that long-term localization is far from solved, and propose promising avenues for future work, including sequence-based localization approaches and the need for better local features. Our benchmark is available at visuallocalization.net. Accepted to CVPR 2018 as a spotlight.
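The evaluation protocol can be illustrated with a minimal pose-error computation: position error as Euclidean distance and orientation error as the angle of the relative rotation between unit quaternions, bucketed into the commonly used (0.25 m / 2°, 0.5 m / 5°, 5 m / 10°) accuracy thresholds. This is a sketch, not the benchmark's reference implementation.

```python
import math

# Hedged sketch of a 6DOF pose-error metric; threshold values follow
# the widely used three-tier scheme, not necessarily this benchmark's code.
def quat_angle_deg(q1, q2):
    """Angle of the relative rotation between two unit quaternions
    (w, x, y, z), in degrees."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    return math.degrees(2.0 * math.acos(min(dot, 1.0)))

def accuracy_bucket(pos_err_m, rot_err_deg):
    """Classify a pose error into (high, medium, coarse) or failed."""
    for name, (tp, tr) in [("high", (0.25, 2.0)),
                           ("medium", (0.5, 5.0)),
                           ("coarse", (5.0, 10.0))]:
        if pos_err_m <= tp and rot_err_deg <= tr:
            return name
    return "failed"

# e.g. a 0.3 m position error combined with a 90-degree heading error
q_id = (1.0, 0.0, 0.0, 0.0)
q_90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(accuracy_bucket(0.3, quat_angle_deg(q_id, q_90)))  # failed
```

Reporting the fraction of query images landing in each bucket, per condition, is what allows the difficulty of day-night or seasonal changes to be compared directly.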
Long-Term Visual Localization Revisited
Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes as well as weather and seasonal variations, while providing highly accurate six degree-of-freedom (6DOF) camera pose estimates. In this paper, we extend three publicly available datasets, containing images captured under a wide variety of viewing conditions but lacking camera pose information, with ground-truth poses, making it possible to evaluate the impact of various factors on 6DOF camera pose estimation accuracy. We also discuss the performance of state-of-the-art localization approaches on these datasets. Additionally, we release around half of the poses for all conditions and keep the remaining half private as a test set, in the hope that this will stimulate research on long-term visual localization, learned local image features, and related research areas. Our datasets are available at visuallocalization.net, where we also host a benchmarking server for automatic evaluation of results on the test set. The presented state-of-the-art results are to a large degree based on submissions to our server.